
    Discretization of Linear Problems in Banach Spaces: Residual Minimization, Nonlinear Petrov-Galerkin, and Monotone Mixed Methods

    This work presents a comprehensive discretization theory for abstract linear operator equations in Banach spaces. The fundamental starting point of the theory is the idea of residual minimization in dual norms, and its inexact version using discrete dual norms. It is shown that this development, in the case of strictly-convex reflexive Banach spaces with strictly-convex dual, gives rise to a class of nonlinear Petrov-Galerkin methods and, equivalently, abstract mixed methods with monotone nonlinearity. Crucial in the formulation of these methods is the (nonlinear) bijective duality map. Under the Fortin condition, we prove discrete stability of the abstract inexact method, and subsequently carry out a complete error analysis. As part of our analysis, we prove new bounds for best-approximation projectors, which involve constants depending on the geometry of the underlying Banach space. The theory generalizes and extends the classical Petrov-Galerkin method as well as existing residual-minimization approaches, such as the discontinuous Petrov-Galerkin method. (43 pages, 2 figures)
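
    The following is a minimal sketch of the two formulations the abstract refers to, in assumed notation not taken from the paper (B : U → V' the linear operator, J_V : V → V' the duality map, U_h ⊂ U and V_h ⊂ V the discrete trial and test spaces):

    ```latex
    % Ideal residual minimization in a (discrete) dual norm:
    \[
      u_h \;=\; \operatorname*{arg\,min}_{w_h \in U_h} \;\bigl\| f - B w_h \bigr\|_{V_h'} .
    \]
    % Equivalent mixed formulation with monotone nonlinearity (the duality map J_V):
    % find (r_h, u_h) \in V_h \times U_h such that
    \begin{align*}
      \langle J_V r_h , v_h \rangle + \langle B u_h , v_h \rangle &= \langle f , v_h \rangle
        && \text{for all } v_h \in V_h, \\
      \langle B w_h , r_h \rangle &= 0
        && \text{for all } w_h \in U_h .
    \end{align*}
    ```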

    Dispersive and dissipative errors in the DPG method with scaled norms for Helmholtz equation

    We consider the discontinuous Petrov-Galerkin (DPG) method, where the test space is normed by a modified graph norm. The modification scales one of the terms in the graph norm by an arbitrary positive scaling parameter. Studying the application of the method to the Helmholtz equation, we find that better results are obtained, under some circumstances, as the scaling parameter approaches a limiting value. We perform a dispersion analysis on the multiple interacting stencils that form the DPG method. The analysis shows that the discrete wavenumbers of the method are complex, explaining the numerically observed artificial dissipation in the computed wave approximations. Since the DPG method is a nonstandard least-squares Galerkin method, we compare its performance with a standard least-squares method.
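
    As a point of reference, a scaled adjoint-graph test norm typically takes the following form; the concrete operator and the placement of the scaling parameter in the paper may differ:

    ```latex
    % Sketch of a scaled graph norm on the test space (assumed form):
    % A^* denotes the formal adjoint of the Helmholtz operator, alpha > 0 the scaling parameter.
    \[
      \| v \|_{V,\alpha}^{2} \;=\; \| A^{*} v \|_{L^{2}}^{2} \;+\; \alpha \, \| v \|_{L^{2}}^{2} .
    \]
    ```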

    Wavenumber Explicit Analysis of a DPG Method for the Multidimensional Helmholtz Equation

    We study the properties of a novel discontinuous Petrov-Galerkin (DPG) method for acoustic wave propagation. The method yields Hermitian positive definite matrices and has good pre-asymptotic stability properties. Numerically, we find that the method exhibits negligible phase errors (otherwise known as pollution errors) even in the lowest order case. Theoretically, we are able to prove error estimates that explicitly show the dependence on the wavenumber ω, the mesh size h, and the polynomial degree p. But the current state of the theory does not fully explain the remarkably good numerical phase errors. Theoretically, comparisons are made with several other recent works that gave wavenumber-explicit estimates. Numerically, comparisons are made with the standard finite element method and its recent modification for wave propagation with clever quadratures. The new DPG method is designed following the previously established principles of optimal test functions. In addition to the nonstandard test functions, in this work, we also use a nonstandard wavenumber-dependent norm on both the test and trial space to obtain our error estimates.
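
    For orientation, a wavenumber-dependent norm of the kind alluded to commonly takes the following weighted H¹ form; the exact trial and test norms used in the paper may differ:

    ```latex
    % A common wavenumber-dependent norm (assumed form, not necessarily the paper's):
    \[
      \| v \|_{\omega}^{2} \;=\; \| \nabla v \|_{L^{2}(\Omega)}^{2} \;+\; \omega^{2} \, \| v \|_{L^{2}(\Omega)}^{2} .
    \]
    ```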

    Data-Driven Goal-Oriented Finite Element Methods: A Machine-Learning Minimal-Residual (ML-MRes) Framework

    We consider the data-driven acceleration of Galerkin-based finite element discretizations for the approximation of partial differential equations (PDEs). The aim is to obtain approximations on meshes that are very coarse, but nevertheless resolve quantities of interest with striking accuracy. Our work is inspired by the machine learning framework of Mishra (2018), who considered the data-driven acceleration of finite-difference schemes. The essential idea is to optimize a numerical method for a given coarse mesh, by minimizing a loss function consisting of errors with respect to the quantities of interest over the training data. Our main contribution lies in the identification of a stable and consistent parametric family of finite element methods on a given mesh. In particular, we consider a general Petrov-Galerkin method, where the trial space is fixed, but the test space has trainable parameters that are to be determined in the offline training process. Finding the optimal test space therefore amounts to obtaining a goal-oriented discretization that is completely tailored to the quantity of interest. The Petrov-Galerkin method is equivalent to a Minimal-Residual formulation, as commonly studied in the context of DPG and optimal Petrov-Galerkin methods. As is natural in deep learning, we use an artificial neural network to define the family of test spaces, whose parameters are learned from the data. Using numerical examples for the Laplacian and the advection equation, we demonstrate that the trained method approximates quantities of interest with superior accuracy even on very coarse meshes. [1] I. Brevis, I. Muga, and K. G. van der Zee, A machine-learning minimal-residual (ML-MRes) framework for goal-oriented finite element discretizations, Computers and Mathematics with Applications, to appear, https://doi.org/10.1016/j.camwa.2020.08.012 (2020).
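
    A minimal sketch of the offline training idea described above, in hypothetical code (not the authors' implementation): the neural-network parametrization of the test space is reduced here to a trainable coefficient matrix over an enriched test basis, and the loss is the quantity-of-interest error over training data.

    ```python
    import jax
    import jax.numpy as jnp

    def petrov_galerkin_solve(theta, B_enriched, f_enriched):
        """Assemble and solve the Petrov-Galerkin system for test functions
        v_i = sum_k theta[i, k] * phi_k, given the enriched system
        B_enriched[k, j] = b(u_j, phi_k) and f_enriched[k] = f(phi_k)."""
        A = theta @ B_enriched           # A[i, j] = b(u_j, v_i)
        rhs = theta @ f_enriched         # rhs[i]  = f(v_i)
        return jnp.linalg.solve(A, rhs)  # coefficients of the discrete solution

    def qoi(u_coeffs, qoi_weights):
        """A linear quantity of interest, e.g. a weighted point or average value."""
        return qoi_weights @ u_coeffs

    def loss(theta, data):
        """Mean-squared quantity-of-interest error over the training set."""
        errs = [qoi(petrov_galerkin_solve(theta, B, f), w) - q_exact
                for (B, f, w, q_exact) in data]
        return jnp.mean(jnp.stack(errs) ** 2)

    def train(theta0, data, steps=200, lr=1e-2):
        """Offline training loop (plain gradient descent for illustration)."""
        theta = theta0
        grad_loss = jax.grad(loss)
        for _ in range(steps):
            theta = theta - lr * grad_loss(theta, data)
        return theta
    ```

    In an actual realization of the framework, theta would be the output of a neural network, and B_enriched and f_enriched would come from a finite element assembly on the given coarse mesh.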

    Automatic stabilization of finite-element simulations using neural networks and hierarchical matrices

    Petrov-Galerkin formulations with optimal test functions allow for the stabilization of finite element simulations. In particular, given a discrete trial space, the optimal test space induces a numerical scheme delivering the best approximation in terms of a problem-dependent energy norm. This ideal approach has two shortcomings: first, we need to explicitly know the set of optimal test functions; and second, the optimal test functions may have large supports inducing expensive dense linear systems. Nevertheless, parametric families of PDEs are an example where it is worth investing some (offline) computational effort to obtain stabilized linear systems that can be solved efficiently, for a given set of parameters, in an online stage. Therefore, as a remedy for the first shortcoming, we explicitly compute (offline) a function mapping any PDE parameter to the matrix of coefficients of optimal test functions (in a basis expansion) associated with that parameter. Next, as a remedy for the second shortcoming, we use low-rank approximation to hierarchically compress the (non-square) matrix of coefficients of optimal test functions. In order to accelerate this process, we train a neural network to learn a critical bottleneck of the compression algorithm (for a given set of PDE parameters). When solving online the resulting (compressed) Petrov-Galerkin formulation, we employ a GMRES iterative solver with inexpensive matrix-vector multiplications thanks to the low-rank features of the compressed matrix. We perform experiments showing that the full online procedure is as fast as the original (unstable) Galerkin approach. In other words, we get the stabilization with hierarchical matrices and neural networks practically for free. We illustrate our findings by means of 2D Eriksson-Johnson and Helmholtz model problems. (28 pages, 16 figures, 4 tables, 6 algorithms)
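
    The online stage can be sketched as follows (hypothetical code; the hierarchical-matrix compression and the neural-network bottleneck of the paper are replaced by a plain truncated SVD for illustration, and all matrix names are assumptions):

    ```python
    import numpy as np
    from scipy.sparse.linalg import LinearOperator, gmres

    def compress(W, rank):
        """Low-rank factors of the (non-square) matrix of optimal-test-function
        coefficients, W ~ U @ V."""
        U, s, Vt = np.linalg.svd(W, full_matrices=False)
        return U[:, :rank] * s[:rank], Vt[:rank, :]

    def solve_online(B_enriched, f_enriched, W, rank):
        """Solve the Petrov-Galerkin system A u = rhs with A = W^T @ B_enriched,
        applying A through the low-rank factors so matvecs stay inexpensive."""
        U, V = compress(W, rank)                            # W ~ U @ V
        n = B_enriched.shape[1]
        matvec = lambda x: V.T @ (U.T @ (B_enriched @ x))   # A @ x without forming A
        A = LinearOperator((n, n), matvec=matvec)
        rhs = V.T @ (U.T @ f_enriched)
        u, info = gmres(A, rhs)
        return u
    ```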

    The Discrete-Dual Minimal-Residual Method (DDMRes) for Weak Advection-Reaction Problems in Banach Spaces

    We propose and analyze a minimal-residual method in discrete dual norms for approximating the solution of the advection-reaction equation in a weak Banach-space setting. The weak formulation allows for the direct approximation of solutions in the Lebesgue spaces L^p, 1 < p < ∞. The greater generality of this weak setting is natural when dealing with rough data and highly irregular solutions, and when enhanced qualitative features of the approximations are needed. We first present a rigorous analysis of the well-posedness of the underlying continuous weak formulation, under natural assumptions on the advection-reaction coefficients. The main contribution is the study of several discrete subspace pairs that guarantee the discrete stability of the method and quasi-optimality in L^p, together with numerical illustrations of these findings, including the elimination of Gibbs phenomena, the computation of optimal test spaces, and an application to 2-D advection.
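
    Schematically, the underlying weak formulation reads as follows (assumed notation; inflow boundary terms and the precise graph test space are omitted):

    ```latex
    % Weak advection-reaction formulation in L^p (schematic):
    % beta is the advection field, mu the reaction coefficient, 1 < p < infinity, 1/p + 1/q = 1.
    \[
      \text{find } u \in L^{p}(\Omega): \qquad
      \int_{\Omega} u \,\bigl( -\operatorname{div}(\boldsymbol{\beta}\, v) + \mu\, v \bigr)\, dx
      \;=\; \int_{\Omega} f\, v \, dx
      \quad \text{for all test functions } v ,
    \]
    % with v ranging over a graph space contained in L^{q}(\Omega).
    ```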

    Neural Control of Discrete Weak Formulations: Galerkin, Least Squares & Minimal-Residual Methods with Quasi-Optimal Weights

    There is tremendous potential in using neural networks to optimize numerical methods. In this paper, we introduce and analyse a framework for the neural optimization of discrete weak formulations, suitable for finite element methods. The main idea of the framework is to include a neural-network function acting as a control variable in the weak form. Finding the neural control that (quasi-)minimizes a suitable cost (or loss) functional then yields a numerical approximation with desirable attributes. In particular, the framework allows in a natural way the incorporation of known data of the exact solution, or the incorporation of stabilization mechanisms (e.g., to remove spurious oscillations). The main result of our analysis pertains to the well-posedness and convergence of the associated constrained-optimization problem. In particular, we prove, under certain conditions, that the discrete weak forms are stable, and that quasi-minimizing neural controls exist, which converge quasi-optimally. We specialize the analysis results to Galerkin, least-squares, and minimal-residual formulations, where the neural-network dependence appears in the form of suitable weights. Elementary numerical experiments support our findings and demonstrate the potential of the framework.
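
    A hypothetical sketch of the neural-control idea for a weighted least-squares formulation (not the authors' code; all names and shapes are assumptions): the element weights are the neural control, the inner problem is a weighted normal-equation solve, and the outer cost uses known data of the exact solution.

    ```python
    import jax
    import jax.numpy as jnp

    def inner_solve(weights, R, r):
        """Weighted least-squares solve: minimize sum_k weights[k] * (R[k, :] @ u - r[k])**2,
        where R and r collect the element residual operators and right-hand sides."""
        W = jnp.diag(weights)
        A = R.T @ W @ R
        b = R.T @ W @ r
        return jnp.linalg.solve(A, b)

    def control_net(params, features):
        """Tiny neural network mapping element features to positive weights (the control)."""
        h = jnp.tanh(features @ params["W1"] + params["b1"])
        return jax.nn.softplus(h @ params["W2"] + params["b2"]).squeeze(-1)

    def outer_loss(params, features, R, r, obs_matrix, obs_values):
        """Cost functional: misfit of the discrete solution against known solution data."""
        u = inner_solve(control_net(params, features), R, r)
        return jnp.sum((obs_matrix @ u - obs_values) ** 2)

    # Quasi-minimizing the cost over the neural control, e.g. by gradient descent:
    # grads = jax.grad(outer_loss)(params, features, R, r, obs_matrix, obs_values)
    ```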